Supplementary Material: Attribution Preservation in Network Compression for Reliable Network Interpretation
ImageNet class labels - the class labels are unusable. In the fine-tuning phase, the pruned network is fine-tuned for 10 epochs with a batch size of 180. We conduct experiments with structured pruning methods on ImageNet and observe the same tendencies in the results (Table 4): our method outperforms naive compression in preserving the attribution maps.
Neural networks embedded in safety-sensitive applications such as self-driving cars and wearable health monitors rely on two important techniques: input attribution for hindsight analysis and network compression to reduce their size for edge computing. In this paper, we show that these seemingly unrelated techniques conflict with each other, as network compression deforms the produced attributions, which could lead to dire consequences for mission-critical applications. This phenomenon arises because conventional network compression methods preserve only the predictions of the network while ignoring the quality of the attributions. To combat this attribution inconsistency problem, we present a framework that preserves the attributions while compressing a network. By employing the Weighted Collapsed Attribution Matching regularizer, we match the attribution maps of the network being compressed to those of its pre-compression former self.
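The core idea of attribution matching can be illustrated with a minimal sketch. The snippet below is not the paper's Weighted Collapsed Attribution Matching regularizer (which is defined over convolutional feature maps with learned weighting); it is a simplified stand-in that uses gradient-times-input attributions on a linear model, where the gradient is available in closed form. The pruning rule (zeroing the smallest 50% of weights by magnitude) and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def attribution(W, x, c):
    # Gradient-times-input attribution for a linear model y = W @ x:
    # d y_c / d x = W[c], so the attribution for class c is W[c] * x.
    return W[c] * x


# "Teacher": the original network; "student": a pruned copy.
# Illustrative pruning: zero out the globally smallest 50% of weights.
W_teacher = rng.normal(size=(5, 16))
W_student = W_teacher.copy()
threshold = np.quantile(np.abs(W_student), 0.5)
W_student[np.abs(W_student) < threshold] = 0.0

x = rng.normal(size=16)
c = int(np.argmax(W_teacher @ x))  # explain the teacher's predicted class

attr_teacher = attribution(W_teacher, x, c)
attr_student = attribution(W_student, x, c)

# Attribution-matching regularizer: penalize the squared distance between
# the student's attribution map and the teacher's. In training, this term
# would be added to the usual task loss during fine-tuning of the pruned net.
match_loss = np.mean((attr_student - attr_teacher) ** 2)
```

In an actual compression pipeline, `match_loss` would be computed per batch with attributions obtained via automatic differentiation, and minimized jointly with the classification loss so that pruning does not deform the explanations.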